In this paper, we focus on the problem of efficiently locating a target object described with free-form language using a mobile robot equipped with vision sensors (e.g., an RGB-D camera). Conventional active visual search predefines the set of objects to be searched for, which limits these techniques in practice. To provide greater flexibility in active visual search, we propose a system in which a user can enter target commands in free-form language; we call this system Zero-shot Active Visual Search (ZAVIS). ZAVIS detects and plans to search for a user-entered target object on a semantic grid map represented with static landmarks (e.g., a table or a bed). To plan an efficient object search pattern, ZAVIS considers commonsense-knowledge-based co-occurrence and predictive uncertainty when deciding which landmarks to visit first. We validate the proposed method in simulated and real-world environments with respect to SR (success rate) and SPL (success weighted by path length). The proposed method outperforms previous methods in SPL in simulated scenarios, with an average gap of 0.283. We further demonstrate ZAVIS in the real world using a Pioneer 3-AT robot.
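The abstract does not spell out the landmark-selection rule, but a minimal sketch of how co-occurrence and predictive uncertainty might be combined to order landmark visits could look as follows; the weights and the linear combination are illustrative assumptions, not ZAVIS's actual scoring function:

```python
import numpy as np

def rank_landmarks(cooccurrence, uncertainty, positions, robot_pos,
                   alpha=1.0, beta=1.0, gamma=0.1):
    """Order landmarks for a target-object search: favor high commonsense
    co-occurrence with the target, penalize predictive uncertainty and
    travel distance. A sketch under assumed weights alpha/beta/gamma."""
    travel = np.linalg.norm(positions - robot_pos, axis=1)
    scores = alpha * cooccurrence - beta * uncertainty - gamma * travel
    return np.argsort(-scores)  # landmark indices, most promising first
```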
Recent directions for offensive language detection are hierarchical modeling, identifying the type and the target of offensive language, and interpretability via offensive span annotation and prediction. These improvements are focused on English and do not transfer well to other languages because of cultural and linguistic differences. In this paper, we present the Korean Offensive Language Dataset (KOLD) comprising 40,429 comments, annotated hierarchically with the type and the target of offensive language, accompanied by annotations of the corresponding text spans. We collect the comments from NAVER News and the YouTube platform and provide the titles of the articles and videos as context information for the annotation process. We use these annotated comments as training data for Korean BERT and RoBERTa models and find that they are effective at offensiveness detection, target classification, and target span detection, while having room for improvement in target group classification and offensive span detection. We discover that the target group distribution differs drastically from existing English datasets, and observe that providing the context information improves model performance in offensiveness detection (+0.3), target classification (+1.5), and target group classification (+13.1). We publicly release the dataset and baseline models.
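A minimal sketch of the context-aware classification setup described above: the title is encoded as a second segment alongside the comment. The checkpoint name and the binary label set are assumptions for illustration, not specifics from the abstract:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Encode the article/video title as context together with the comment,
# then classify offensiveness. "klue/bert-base" is an assumed Korean
# BERT checkpoint; the paper's exact model may differ.
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=2)

enc = tokenizer("기사 제목", "댓글 내용", truncation=True, return_tensors="pt")
logits = model(**enc).logits  # offensive vs. not offensive
```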
This paper is a technical report on our system submitted to the chemical identification task of the BioCreative VII Track 2 challenge. The main characteristic of this challenge is that its data consist of full-text articles, whereas current datasets usually contain only titles and abstracts. To address the problem effectively, we aim to improve tagging consistency and entity coverage using various methods, such as majority voting within the same article for named entity recognition (NER) and a hybrid approach combining a dictionary and a neural model for normalization. In experiments on the NLM-Chem dataset, we show that our methods improve the model's performance, particularly in terms of recall. Finally, in the official evaluation of the challenge, our system ranked first, substantially outperforming the baseline model and more than 80 submissions from 16 teams.
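A minimal sketch of the article-level majority voting for tagging consistency mentioned above; the input format is an assumption (a list of (surface form, predicted label) pairs from one full-text article):

```python
from collections import Counter

def majority_vote(mentions):
    """Harmonize NER labels within one article: every occurrence of a
    mention string receives the label the model predicted for it most
    often across the article."""
    votes = {}
    for text, label in mentions:
        votes.setdefault(text, Counter())[label] += 1
    consensus = {text: c.most_common(1)[0][0] for text, c in votes.items()}
    return [(text, consensus[text]) for text, _ in mentions]
```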
Robust learning methods aim to learn a clean target distribution from noisy and corrupted training data, where a specific corruption pattern is often assumed a priori. Our proposed method can not only successfully learn the clean target distribution from a dirty dataset but also estimate the underlying noise pattern. To this end, we leverage a mixture-of-experts model that can distinguish two different types of predictive uncertainty, aleatoric and epistemic uncertainty. We show that the ability to estimate uncertainty plays a significant role in elucidating corruption patterns, as these two objectives are tightly intertwined. We also present a novel validation scheme for evaluating the performance of corruption pattern estimation. Our proposed method is extensively assessed in terms of both robustness and corruption pattern estimation across a number of domains, including computer vision and natural language processing.
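One standard way to separate the two uncertainty types named above is the entropy decomposition over an ensemble or mixture of experts; the sketch below is one plausible reading of the abstract, and the paper's exact formulation may differ:

```python
import torch

def decompose_uncertainty(expert_probs):
    """Split total predictive uncertainty into aleatoric and epistemic
    parts given softmax outputs from K experts, shape (K, num_classes).
    Total = H[E[p]], aleatoric = E[H[p]], epistemic = their difference
    (the mutual information)."""
    eps = 1e-12
    mean_p = expert_probs.mean(dim=0)
    total = -(mean_p * (mean_p + eps).log()).sum()                          # H[E[p]]
    aleatoric = -(expert_probs * (expert_probs + eps).log()).sum(1).mean()  # E[H[p]]
    epistemic = total - aleatoric
    return aleatoric, epistemic
```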
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a collection of 8 Korean natural language understanding (NLU) tasks, including topic classification, semantic textual similarity, natural language inference, named entity recognition, relation extraction, dependency parsing, machine reading comprehension, and dialogue state tracking. We build all of the tasks from scratch from diverse source corpora while respecting copyrights, to ensure accessibility for anyone without restrictions. With ethical considerations in mind, we carefully design the annotation protocols. Along with the benchmark tasks and data, we provide suitable evaluation metrics and fine-tuning recipes for pretrained language models for each task. We also release pretrained language models (PLMs), KLUE-BERT and KLUE-RoBERTa, to help reproduce the baseline models on KLUE and thereby facilitate future research. We make a few interesting observations from preliminary experiments on the proposed KLUE benchmark suite, already demonstrating its usefulness. First, we find that KLUE-RoBERTa-large outperforms other baselines, including multilingual PLMs and existing open-source Korean PLMs. Second, we see minimal performance degradation even when we replace personally identifiable information in the pretraining corpus, suggesting that privacy and NLU capability are not at odds with each other. Lastly, we find that using BPE tokenization in combination with morpheme-level pre-tokenization is effective for tasks involving morpheme-level tagging, detection, and generation. In addition to accelerating Korean NLP research, our comprehensive documentation on creating KLUE will facilitate creating similar resources for other languages in the future. KLUE is available at https://klue-benchmark.com.
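A minimal sketch of reproducing a KLUE baseline with the released resources; the Hub identifiers ("klue", "ynat", "klue/roberta-large") are assumptions based on the public releases, not guaranteed by the abstract itself:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel

# Load the topic-classification subset and the released PLM.
dataset = load_dataset("klue", "ynat")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
model = AutoModel.from_pretrained("klue/roberta-large")
```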
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, the partial trajectories of cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the obtained trajectories from the proposed method are compared with those of semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy on macrophage data under challenging conditions: feeble fluorescent intensity, irregular shapes, and the motion of macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
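A minimal sketch of the local Otsu's thresholding step only; the window size and Gaussian sigma are assumptions, and the paper's full pipeline adds space-time filtering and SUBSURF refinement, which are omitted here:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_frame(frame, window=64):
    """Smooth one microscopy frame, then apply Otsu's threshold within
    non-overlapping local windows, so dim cells are not lost to a single
    global threshold."""
    smoothed = ndimage.gaussian_filter(frame.astype(float), sigma=2.0)
    mask = np.zeros_like(smoothed, dtype=bool)
    for i in range(0, smoothed.shape[0], window):
        for j in range(0, smoothed.shape[1], window):
            patch = smoothed[i:i + window, j:j + window]
            if patch.max() > patch.min():  # Otsu needs some contrast
                mask[i:i + window, j:j + window] = patch > threshold_otsu(patch)
    return mask
```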
Data-centric AI has shed light on the significance of data within the machine learning (ML) pipeline. Acknowledging its importance, various research initiatives and policies have been put forward by academia, industry, and government departments. Although the capability to utilize existing data is essential, the capability to build a dataset has become more important than ever. In consideration of this trend, we propose "Data Management Operation and Recipes" to guide the industry regardless of the task or domain. In other words, this paper presents the concept of DMOps derived from real-world experience. By offering a baseline for building data, we aim to help the industry streamline its data operations optimally.
With the rapid development of drone technologies, drones are widely used in many applications, including military domains. In this paper, a novel situation-aware DRL-based autonomous nonlinear drone mobility control algorithm is proposed for cyber-physical loitering munition applications. On the battlefield, designing a DRL-based autonomous control algorithm is not straightforward because real-world data gathering is generally not available. Therefore, the approach in this paper is to construct a cyber-physical virtual environment with Unity. Based on the virtual cyber-physical battlefield scenarios, a DRL-based automated nonlinear drone mobility control algorithm can be designed, evaluated, and visualized. Moreover, many obstacles exist that are harmful to linear trajectory control in real-world battlefield scenarios. Thus, our proposed autonomous nonlinear drone mobility control algorithm utilizes situation-aware components that are implemented with a Raycast function in Unity virtual scenarios. Based on the gathered situation-aware information, the drone can autonomously and nonlinearly adjust its trajectory during flight. This approach is clearly beneficial for avoiding obstacles in obstacle-deployed battlefields. Our visualization-based performance evaluation shows that the proposed algorithm is superior to other linear mobility control algorithms.
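The paper realizes its situation awareness with Unity's Raycast; a language-agnostic sketch of the same idea follows, with ray distances around the drone's heading serving as the situation-aware observation fed to the DRL policy. The 2D occupancy grid, ray-marching helper, and all parameters are assumptions for illustration:

```python
import numpy as np

def cast_ray(occupancy, origin, angle, max_range, step=0.1):
    """March along a ray through a 2D occupancy grid until an obstacle
    cell is hit or max_range is reached; returns the traveled distance.
    A hypothetical stand-in for Unity's Raycast."""
    direction = np.array([np.cos(angle), np.sin(angle)])
    dist = 0.0
    while dist < max_range:
        x, y = (origin + dist * direction).astype(int)
        if not (0 <= x < occupancy.shape[0] and 0 <= y < occupancy.shape[1]):
            break
        if occupancy[x, y]:
            return dist
        dist += step
    return max_range

def situation_aware_observation(occupancy, position, heading,
                                num_rays=8, max_range=10.0):
    """Build the observation for the DRL policy: obstacle distances fanned
    around the drone's heading, plus its pose."""
    angles = heading + np.linspace(-np.pi / 2, np.pi / 2, num_rays)
    distances = [cast_ray(occupancy, position, a, max_range) for a in angles]
    return np.concatenate([position, [heading], distances])
```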
This paper proposes a new regularization algorithm referred to as macro-block dropout. Overfitting has been a difficult problem in training large neural network models. The dropout technique has proven to be simple yet very effective for regularization by preventing complex co-adaptations during training. In our work, we define a macro-block that contains a large number of units from the input to a Recurrent Neural Network (RNN). Rather than applying dropout to each unit, we apply random dropout to each macro-block. This algorithm has the effect of applying different dropout rates to each layer even while keeping a constant average dropout rate, which yields better regularization. In our experiments using a Recurrent Neural Network-Transducer (RNN-T), this algorithm achieves relative Word Error Rate (WER) improvements of 4.30% and 6.13% over conventional dropout on LibriSpeech test-clean and test-other. With an Attention-based Encoder-Decoder (AED) model, this algorithm achieves relative WER improvements of 4.36% and 5.85% over conventional dropout on the same test sets.
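A minimal sketch of the block-level masking idea described above, applied to a flat feature tensor for simplicity; the block count, rate, divisibility requirement, and inverted-dropout rescaling are assumptions, and the paper applies the technique to RNN inputs:

```python
import torch

def macro_block_dropout(x, num_blocks=4, p=0.2, training=True):
    """Split the feature dimension into contiguous macro-blocks and zero
    out whole blocks at random, instead of individual units.
    `x` has shape (batch, features); features must divide by num_blocks."""
    if not training or p == 0.0:
        return x
    batch, features = x.shape
    keep = (torch.rand(batch, num_blocks, device=x.device) > p).float()
    mask = keep.repeat_interleave(features // num_blocks, dim=1)
    return x * mask / (1.0 - p)  # inverted-dropout rescaling
```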
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding comes not only from the accurate perception of each user's affective state (e.g., engagement) but also from the recognition of the affect interplay between members (e.g., joint engagement), which presents as complex but subtle nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets augmented with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we demonstrate experimental results on the use of the trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representations of the models to investigate model interpretability when recognizing joint engagement. This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.
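A minimal sketch of the CutOut augmentation named in the abstract, adapted to video clips by masking the same patch across all frames; the patch size and placement policy are assumptions:

```python
import numpy as np

def cutout(frames, size=32, rng=None):
    """Zero out one random square patch at the same location across all
    frames of a clip. `frames` has shape (T, H, W, C)."""
    rng = rng or np.random.default_rng()
    _, h, w, _ = frames.shape
    top = rng.integers(0, max(1, h - size))
    left = rng.integers(0, max(1, w - size))
    out = frames.copy()
    out[:, top:top + size, left:left + size, :] = 0
    return out
```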